Bayesian inference for high-dimensional linear regression under mnet priors

Authors

  • Aixin Tan
  • Jian Huang
Abstract

For regression problems that involve many potential predictors, Bayesian variable selection (BVS) is a powerful tool: it associates each candidate model with a posterior probability and achieves strong prediction performance through Bayesian model averaging (BMA). Two challenges in using such models are specifying a suitable prior and computing posterior quantities for inference. We contribute to the BVS modeling literature in the following ways. First, we propose a new family of priors, called the mnet prior, which is indexed by a few hyperparameters that allow great flexibility in the prior density. The hyperparameters can also be treated as random, so that their values need not be tuned manually but instead adapt to the data. Simulation studies demonstrate the good prediction and variable selection performance of these models. Second, the analytical expression of the posterior distribution under the mnet prior is unavailable in general, as is the case for most BVS models. We develop an adaptive Markov chain Monte Carlo (MCMC) algorithm that facilitates computation in high-dimensional regression problems. Finally, we showcase various ways to draw inference from BVS models, highlighting a new way to visualize the importance of each predictor, along with estimation of the coefficients and their uncertainties. These are demonstrated through the analysis of a breast cancer dataset. (The Canadian Journal of Statistics, © Statistical Society of Canada.)
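The BMA prediction described in the abstract weights each candidate model's prediction by its posterior model probability. A minimal sketch of this idea, with hypothetical names and inputs not taken from the paper:

```python
import numpy as np

def bma_predict(preds, log_post):
    """Bayesian model averaging of predictions.

    preds:    (n_models, n_points) array, one row of predictions per model.
    log_post: unnormalized log posterior model probabilities.
    """
    w = np.exp(log_post - np.max(log_post))  # subtract max for numerical stability
    w /= w.sum()                             # normalize to posterior weights
    return w @ preds                         # posterior-weighted average prediction

# Toy example: two models with equal posterior weight reduce to a simple mean.
preds = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
log_post = np.array([0.0, 0.0])
print(bma_predict(preds, log_post))  # [2. 3.]
```

In practice the log posterior model probabilities would come from the MCMC output rather than being supplied directly.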


Similar articles

Supplement to “Bayesian inference for high-dimensional linear regression under the mnet priors”

Here, a q-dimensional binary vector γ = (γ1, . . . , γq) ∈ {0, 1}^q =: Γ indicates a selected set of predictors, and βγ denotes the subvector of coefficients for the predictors selected by γ. Priors of the BVS model are specified in (3b)–(3d). In this BVS model, (3c) specifies a uniform prior on log σ, a typical choice of non-informative prior for this parameter in linear regression models, and (3d) ...
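The indicator notation above can be made concrete: γ picks out which predictors enter the model, and βγ is the matching coefficient subvector. A purely illustrative sketch (the values and variable names are assumptions, not from the supplement):

```python
import numpy as np

q = 5
gamma = np.array([1, 0, 1, 0, 1])             # gamma in {0,1}^q: selects predictors 1, 3, 5
beta = np.array([0.5, 0.0, -1.2, 0.0, 2.0])   # full coefficient vector

mask = gamma.astype(bool)
beta_gamma = beta[mask]                        # subvector selected by gamma

X = np.arange(10.0).reshape(2, 5)             # toy design matrix, 2 observations
X_gamma = X[:, mask]                          # design submatrix for the selected model
mean = X_gamma @ beta_gamma                   # linear predictor under model gamma

print(beta_gamma)  # [ 0.5 -1.2  2. ]
print(mean)        # [ 5.6 12.1]
```

Each of the 2^q settings of γ corresponds to one candidate model in the BVS framework.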


Bayesian Inference for Spatial Beta Generalized Linear Mixed Models

In some applications, the response variable takes values in the unit interval. The standard linear regression model is not appropriate for modelling this type of data because the normality assumption is not met. Alternatively, the beta regression model has been introduced to analyze such observations. The beta distribution represents a flexible density family on the (0, 1) interval that covers symm...


Location Reparameterization and Default Priors for Statistical Analysis

This paper develops default priors for Bayesian analysis that reproduce familiar frequentist and Bayesian analyses for models that are exponential or location. For the vector parameter case there is an information adjustment that avoids the Bayesian marginalization paradoxes and properly targets the prior on the parameter of interest, thus adjusting for any complicating nonlinearity; the details ...


Bayesian Feature Selection with Strongly Regularizing Priors Maps to the Ising Model

Identifying small subsets of features that are relevant for prediction and classification tasks is a central problem in machine learning and statistics. The feature selection task is especially important, and computationally difficult, for modern data sets where the number of features can be comparable to or even exceed the number of samples. Here, we show that feature selection with Bayesian i...


Heuristics as a special case of Bayesian Inference

Probabilistic inference models (e.g. Bayesian models) are often cast as being rational and at odds with simple heuristic approaches. We show that prominent decision heuristics, take-the-best and tallying, are special cases of Bayesian inference. We developed two Bayesian learning models by extending two popular regularized regression approaches, lasso and ridge regression. The priors ...
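The connection between regularized regression and Bayesian priors that this snippet alludes to is standard: ridge regression is the MAP estimate under an i.i.d. Gaussian prior on the coefficients. A minimal sketch of that well-known correspondence (not the authors' specific models):

```python
import numpy as np

def ridge_map(X, y, lam):
    """Ridge estimate, equivalently the MAP under beta ~ N(0, tau^2 I)
    with Gaussian noise, where lam = sigma^2 / tau^2:
    beta_hat = (X'X + lam I)^{-1} X'y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Toy data: the estimate is shrunk slightly toward zero but close to the truth.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=50)
print(ridge_map(X, y, lam=1.0))
```

A stronger prior (larger lam, i.e. smaller prior variance tau^2) shrinks the coefficients more aggressively, which is the regularizing behavior the snippet describes.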




Publication date: 2016